A binary classifier that attempts to predict whether the price of an asset will rise or fall naturally gives rise to a trading strategy that always holds a position in the market. Selective classification extends a binary or multi-class classifier by allowing it to abstain from making a prediction for certain inputs, thereby trading off the accuracy of the resulting selective classifier against its coverage of the input feature space. A selective classifier in turn gives rise to a trading strategy that takes no position when the classifier abstains. We investigate the application of binary and ternary selective classification to trading strategy design. For ternary classification, in addition to the classes for price increases and decreases, we include a third class that corresponds to relatively small price moves in either direction, giving the classifier another way to avoid making a directional prediction. We use a walk-forward train-validate-test approach to evaluate and compare binary and ternary, selective and non-selective classifiers across several feature sets, based on four classification approaches: logistic regression, random forests, and feed-forward and recurrent neural networks. We then turn these classifiers into trading strategies that we backtest on commodity futures markets. Our empirical results demonstrate the potential of selective classification for trading.
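A minimal sketch of the abstention mechanism described above, assuming a probabilistic base classifier and a confidence threshold (the wrapper class, threshold value, and toy data are illustrative stand-ins, not the paper's exact construction):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical wrapper: abstain whenever the base classifier's top-class
# probability falls below a confidence threshold tuned on validation data.
class SelectiveClassifier:
    def __init__(self, base, threshold=0.6):
        self.base = base          # any scikit-learn probabilistic classifier
        self.threshold = threshold

    def fit(self, X, y):
        self.base.fit(X, y)
        return self

    def predict(self, X):
        proba = self.base.predict_proba(X)
        labels = self.base.classes_[proba.argmax(axis=1)]
        confident = proba.max(axis=1) >= self.threshold
        # Abstention is encoded as None -> the strategy takes no position.
        return np.where(confident, labels, None)

# Toy usage: +1 = long, -1 = short, None = flat (abstain).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                 # stand-in feature matrix
y = np.where(X[:, 0] > 0, 1, -1)              # stand-in up/down labels
clf = SelectiveClassifier(LogisticRegression()).fit(X[:400], y[:400])
signals = clf.predict(X[400:])
```

Raising the threshold lowers coverage but typically raises accuracy on the inputs where the classifier does predict, which is exactly the trade-off the abstract describes.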
Pre-trained Transformers currently dominate most NLP tasks. They impose, however, limits on the maximum input length (512 sub-words in BERT), which are too restrictive in the legal domain. Even sparse-attention models, such as Longformer and BigBird, which increase the maximum input length to 4,096 sub-words, severely truncate texts in three of the six datasets of LexGLUE. Simpler linear classifiers with TF-IDF features can handle texts of any length and require far fewer resources to train and deploy, but are usually outperformed by pre-trained Transformers. We explore two directions to cope with long legal texts: (i) modifying a Longformer warm-started from LegalBERT to handle even longer texts (up to 8,192 sub-words), and (ii) modifying LegalBERT to use TF-IDF representations. The first approach is the best in terms of performance, surpassing a hierarchical version of LegalBERT, which was the previous state of the art in LexGLUE. The second approach leads to computationally more efficient models at the expense of lower performance, but the resulting models still overall outperform a linear SVM with TF-IDF features in long legal document classification.
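For reference, the linear TF-IDF baseline the abstract compares against can be reproduced in a few lines with scikit-learn; the feature settings below are illustrative defaults, not the authors' exact configuration:

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# TF-IDF features over word n-grams handle arbitrarily long documents,
# since every text is reduced to a fixed-size sparse vector.
baseline = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=5, sublinear_tf=True),
    LinearSVC(C=1.0),
)
# train_texts / train_labels are placeholders for a LexGLUE-style split.
# baseline.fit(train_texts, train_labels)
# preds = baseline.predict(test_texts)
```

Because the vectorizer never truncates its input, this baseline sidesteps the length limits that motivate the Longformer and hybrid approaches above, at the cost of losing contextual representations.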
Cross-lingual transfer learning has proven useful in a variety of natural language processing (NLP) tasks, but it is understudied in the context of legal NLP, and not at all in legal judgment prediction (LJP). We explore transfer learning techniques for LJP using a trilingual Swiss judgment dataset, including cases written in three languages. We find that cross-lingual transfer improves the overall results across languages, especially when we use adapter-based fine-tuning. Finally, we further improve the models' performance by augmenting the training corpus with machine-translated versions of the original documents, using a 3x larger training corpus. Furthermore, we perform an analysis exploring the effect of cross-domain and cross-regional transfer, i.e., training models across domains (legal areas) or regions. We find that in both settings (legal areas, origin regions), models trained across all groups perform better overall, while they also improve results in the worst-case scenarios. Finally, we report improved results when we ambitiously apply cross-jurisdiction transfer, where we further augment the dataset with Indian legal cases.
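A minimal sketch of the bottleneck adapter idea behind adapter-based fine-tuning mentioned above (hidden size, bottleneck size, and placement are illustrative; the paper's exact adapter configuration may differ):

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Small residual bottleneck inserted into a frozen transformer layer.
    Only these parameters are trained, which keeps per-language or
    per-task fine-tuning cheap."""
    def __init__(self, hidden_size=768, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.GELU()

    def forward(self, hidden_states):
        # The residual connection preserves the frozen backbone's
        # representation; the adapter learns only a correction to it.
        return hidden_states + self.up(self.act(self.down(hidden_states)))

# Toy usage: adapt a batch of hidden states from a frozen encoder layer.
adapter = BottleneckAdapter()
h = torch.randn(2, 128, 768)   # (batch, sequence, hidden)
out = adapter(h)
```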
We consider zero-shot cross-lingual transfer in legal topic classification using the recent MultiEURLEX dataset. Since the original dataset contains parallel documents, which is unrealistic for zero-shot cross-lingual transfer, we develop a new version of the dataset without parallel documents. We use it to show that translation-based methods vastly outperform multilingually pre-trained models, the best previous zero-shot transfer method for MultiEURLEX. We also develop a bilingual teacher-student zero-shot transfer approach, which exploits additional unlabeled documents of the target language and performs better than a model fine-tuned directly on labeled target-language documents.
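A compressed sketch of the teacher-student step described above, assuming a teacher already fine-tuned on (translated) labeled data that pseudo-labels unlabeled target-language documents for the student; the models, optimizer, and batch are placeholders:

```python
import torch
import torch.nn.functional as F

def distillation_step(teacher, student, optimizer, target_lang_batch):
    """One training step: the teacher produces soft labels for unlabeled
    target-language inputs; the student is trained to match them via
    KL divergence."""
    teacher.eval()
    with torch.no_grad():
        soft_targets = F.softmax(teacher(target_lang_batch), dim=-1)
    student.train()
    log_probs = F.log_softmax(student(target_lang_batch), dim=-1)
    loss = F.kl_div(log_probs, soft_targets, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The key point is that the student only ever sees target-language text, so the transfer remains zero-shot with respect to target-language labels.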
Laws and their interpretations, legal arguments and agreements are typically expressed in writing, leading to the production of vast corpora of legal text. Their analysis, which is at the center of legal practice, becomes increasingly elaborate as these collections grow in size. Natural language understanding (NLU) technologies can be a valuable tool to support legal practitioners in these endeavors. Their usefulness, however, largely depends on whether current state-of-the-art models can generalize across various tasks in the legal domain. To answer this currently open question, we introduce the Legal General Language Understanding Evaluation (LexGLUE) benchmark, a collection of datasets for evaluating model performance across a diverse set of legal NLU tasks in a standardized way. We also provide an evaluation and analysis of several generic and legal-oriented models demonstrating that the latter consistently offer performance improvements across multiple tasks.
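LexGLUE is distributed through the Hugging Face Hub; a minimal loading example, assuming the `lex_glue` hub id and the `ecthr_a` task name:

```python
from datasets import load_dataset

# Each LexGLUE task (ECtHR, EUR-LEX, SCOTUS, LEDGAR, UNFAIR-ToS, CaseHOLD)
# is exposed as a configuration of the same dataset.
ecthr = load_dataset("lex_glue", "ecthr_a")   # assumed hub id / task name
print(ecthr)                                   # splits and features
```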